affinity propagation: add comparison test with sklearn implementation #132
Conversation
This implementation seems clearer: I'll try to reimplement it if I find some time.
Thanks for your work! Am I right that by "python comparison" you mean sklearn implementation?
You can try it, but keep in mind that there should be clear benefits for the package to switch to it:
Actually, you would already need some tests when/if you write your own implementation of the method. The current state of the affprop tests in master is obviously unsatisfactory, because there are no tests for the correctness of the generated clusters.
In all these test cases the point coordinates should be fixed (not randomly generated), so the "human eye" can easily see the expected result. The current test of 500 random points in 10 dimensions should come last.
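A minimal sketch of such a fixture (the point values and helper name below are illustrative, not from the PR): two well-separated groups of fixed 2-D points, plus the precomputed similarity matrix that sklearn's `AffinityPropagation(affinity="precomputed")` expects. sklearn's default similarity is the negative squared Euclidean distance, so the hand-built matrix mirrors that:

```python
# Hypothetical fixture: two well-separated groups of fixed 2-D points,
# so the expected clustering is obvious by eye.
points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),        # group 1
          (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]  # group 2

def similarity_matrix(pts):
    """Negative squared Euclidean distances, matching sklearn's
    default similarity for affinity propagation."""
    n = len(pts)
    return [[-sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
             for j in range(n)]
            for i in range(n)]

S = similarity_matrix(points)
# Within-group similarities dominate between-group ones:
print(S[0][1], S[0][3])  # -> -1.0 -200.0
```

With a fixture like this, the expected labeling is unambiguous, so the test does not depend on a random seed.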
Good point, I just tried computing the Rand index for validation. The current implementation is correct. Can we still merge this PR to keep the test, though? Here is what I use for the Python implementation:

```julia
using PyCall
@pyimport sklearn.cluster as cl

af = cl.AffinityPropagation(affinity="precomputed")[:fit](similarityMatrix)
labels = af[:labels_] .+ 1  # shift sklearn's 0-based labels to 1-based
```
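For reference, the Rand index mentioned above can also be computed without sklearn. This is a minimal pure-Python sketch (the function name is mine, not from the PR); it counts the fraction of point pairs on which two labelings agree, so identical partitions score 1.0 regardless of label renaming:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of point pairs on which the two labelings agree:
    same cluster in both, or different clusters in both."""
    assert len(labels_a) == len(labels_b)
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Identical partitions (up to label renaming) score 1.0:
print(rand_index([1, 1, 2, 2], [5, 5, 9, 9]))  # -> 1.0
```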
@jingpengw Thanks for checking this! I think it makes sense to merge it with minor corrections.
Updated, please take a look.
See issue #131.